Abstract:
Fuzzy C-Means (FCM) is a popular technique for clustering data. It combines the concepts of the K-Means algorithm and fuzzy set theory. However, FCM faces the challenges of converging to a local optimum and of producing results that are sensitive to the initialisation conditions. To address these problems, prior work has incorporated Ant Colony Optimisation (ACO) into the conventional FCM algorithm. The authors of this paper find that although the FCM-ACO algorithm is a definite improvement over traditional FCM, there is still scope for improving the scalability and accuracy of the system. The authors propose using a Multi Round Sampling (MRS) technique along with Ant Colony Optimisation. The proposed algorithm clusters the dataset without considering it in its entirety, allowing for a more space- and time-efficient system. This makes the system highly scalable and hence suitable for large datasets. Moreover, extensive experiments on several publicly available datasets, both large and small, show that the proposed Multi Round Sampling Ant Colony based Fuzzy C-Means (MRSA-FCM) algorithm gives superior clustering results over the FCM and FCM-ACO systems.
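For reference, the baseline FCM iteration the abstract builds on can be sketched as below. This is plain FCM, not the authors' MRSA-FCM; the fuzzifier `m`, tolerance, and random initialisation are illustrative assumptions.

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain Fuzzy C-Means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships of each point sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)  # u_ik proportional to d_ik^(-2/(m-1))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy usage: two well-separated Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (30, 2)), rng.normal(8.0, 0.5, (30, 2))])
centers, U = fcm(X, 2)
```

The sensitivity to initialisation mentioned in the abstract enters through the random starting memberships `U`; ACO-based variants aim to steer the search away from the resulting local optima.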
Abstract:
This paper presents a novel approach to the optimization of a semi-active suspension system based on its ride comfort and road handling characteristics. A semi-active suspension is capable of providing both ride comfort and road handling of the vehicle through the optimization of various parameters. The model used for the study is a quarter-car semi-active model. The analysis in the paper has been carried out using the Heat Transfer Search and Teaching-Learning Based Optimization algorithms. The paper also compares semi-active and passive suspensions under similar road conditions by plotting the linear acceleration of the sprung mass with respect to time.
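The quarter-car model mentioned above is a standard four-state ODE. A minimal passive (fixed-damper) quarter-car simulation is sketched below; all parameter values (masses, stiffnesses, damping, road input) are illustrative assumptions, not the paper's, and a semi-active controller would vary the damping coefficient `c` online instead of holding it fixed.

```python
import numpy as np

# Hypothetical quarter-car parameters (sprung/unsprung mass, spring, tyre, damper)
MS, MU = 290.0, 40.0           # kg
KS, KT = 16000.0, 190000.0     # N/m
CS = 1000.0                    # N*s/m (fixed value -> passive suspension)

def quarter_car_rhs(s, road, c=CS):
    """State s = [x_s, v_s, x_u, v_u]: sprung/unsprung positions and velocities."""
    xs, vs, xu, vu = s
    f_spring = KS * (xu - xs)
    f_damp = c * (vu - vs)
    f_tyre = KT * (road - xu)
    return np.array([vs, (f_spring + f_damp) / MS,
                     vu, (-f_spring - f_damp + f_tyre) / MU])

def simulate(T=2.0, h=1e-4, bump=0.05):
    """Forward-Euler integration; records sprung-mass acceleration over time."""
    s = np.zeros(4)
    acc = []
    for k in range(int(T / h)):
        road = bump if k * h > 0.5 else 0.0   # 5 cm step road input at t = 0.5 s
        d = quarter_car_rhs(s, road)
        acc.append(d[1])                      # sprung-mass linear acceleration
        s = s + h * d
    return np.array(acc)

acc = simulate()
```

Plotting `acc` against time reproduces the kind of sprung-mass acceleration trace the paper uses to compare passive and semi-active configurations.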
Abstract:
Incomplete data has emerged as a prominent problem in machine learning, big data and various other academic fields. Motivated by the surge in deep learning techniques for problem-solving, the authors propose a deep learning-metaheuristic approach to the problem of imputing missing data. The proposed approach (DL-GSA) uses a nature-inspired metaheuristic, the Gravitational Search Algorithm, in combination with a deep autoencoder, and performs better than existing methods in terms of both accuracy and time. Owing to these improvements, DL-GSA has wide applications in both time- and accuracy-sensitive areas such as the imputation of scientific and research datasets, data analysis, machine learning and big data.
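The metaheuristic component of DL-GSA is the Gravitational Search Algorithm. A minimal GSA sketch in the style of the original formulation is given below, applied to a toy sphere function rather than autoencoder weights; the search bounds, gravity decay, and agent count are illustrative assumptions.

```python
import numpy as np

def gsa_minimize(f, dim, n_agents=20, iters=100, g0=100.0, alpha=20.0, seed=0):
    """Gravitational Search Algorithm: agents attract one another with
    fitness-dependent masses; the gravitational constant G decays over time."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n_agents, dim))    # agent positions
    V = np.zeros_like(X)                           # agent velocities
    best_x, best_val = X[0].copy(), np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        i = int(fit.argmin())
        if fit[i] < best_val:                      # track the best solution seen so far
            best_x, best_val = X[i].copy(), float(fit[i])
        # lower (better) fitness -> larger normalized mass
        m = (fit - fit.max()) / (fit.min() - fit.max() + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)        # decaying gravitational constant
        acc = np.zeros_like(X)
        for i in range(n_agents):
            for j in range(n_agents):
                if i != j:
                    diff = X[j] - X[i]
                    r = np.linalg.norm(diff) + 1e-12
                    acc[i] += rng.random() * G * M[j] * diff / r
        V = rng.random(X.shape) * V + acc          # stochastic velocity update
        X = X + V
    return best_x, best_val

best_x, best_val = gsa_minimize(lambda x: float(np.sum(x * x)), dim=2)
```

In DL-GSA the objective `f` would presumably be the autoencoder's reconstruction error over candidate imputations rather than this toy function.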
Abstract:
One of the promises of the Internet of Things is interoperability, where data is shared and understood between things and applications. Such interoperability aims to achieve a common goal with better efficiency, optimization, and a better user experience. However, these things produce data in different formats and with different semantics, making interoperability a real challenge still to be tackled. Linked data is currently positioned as a promising technology capable of addressing this heterogeneity challenge. In this paper, we present an efficient energy storage system which relies on a multi-system semantic representation of several data sources. Our approach analyzes data collected from external services, such as weather and billing systems, and from our internal systems, such as the building management system, power monitoring system, and data center system. The analyzed data, coupled with our model, enable efficient energy storage usage through forecasting and model predictive control, for purposes such as reducing the energy drawn from the utility provider. In this work, we detail the application model and then demonstrate the capability of the approach through a case study in a large office building.
Abstract:
In this paper, we have implemented the Nosé-Hoover chaotic generator by numerically coding the differential equations in Python and modeling them in the Xilinx Kintex 7 FPGA development environment using Euler's, Heun's, and 4th-order Runge-Kutta-based algorithms in Verilog HDL. The performance of these algorithms has been analyzed by comparing the Slice LUTs used, delay, DSPs, etc. The accuracy of the HDL implementation of each algorithm has been analyzed by calculating the Root Mean Square Error (RMSE). In addition, the computation times of the PC-based and FPGA-based implementations have also been compared for each algorithm.
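The three numerical schemes named in the abstract are easy to prototype in Python before porting to Verilog. The sketch below uses a common form of the Nosé-Hoover oscillator (dx/dt = y, dy/dt = yz - x, dz/dt = a - y²); the parameter a = 1, the step size, and the initial condition are assumptions, and the paper's exact parameterization may differ.

```python
import numpy as np

def nose_hoover(s, a=1.0):
    """Nosé-Hoover oscillator: dx/dt = y, dy/dt = y*z - x, dz/dt = a - y*y."""
    x, y, z = s
    return np.array([y, y * z - x, a - y * y])

def euler_step(f, s, h):               # 1st-order forward Euler
    return s + h * f(s)

def heun_step(f, s, h):                # 2nd-order Heun (predictor-corrector)
    k1 = f(s)
    k2 = f(s + h * k1)
    return s + 0.5 * h * (k1 + k2)

def rk4_step(f, s, h):                 # classical 4th-order Runge-Kutta
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(step, n=1000, h=0.01, s0=(0.0, 1.0, 0.0)):
    """Integrate n steps with the given one-step scheme."""
    s = np.array(s0)
    out = [s]
    for _ in range(n):
        s = step(nose_hoover, s, h)
        out.append(s)
    return np.array(out)

tr_rk4 = trajectory(rk4_step)
tr_euler = trajectory(euler_step)
```

Comparing `tr_euler` and `tr_rk4` against a fine-step reference is the software analogue of the RMSE comparison the paper performs on the HDL implementations; the lower-order Euler scheme accumulates visible error over the same horizon.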
Abstract:
Automatic speech recognition (ASR) systems, such as the recurrent neural network transducer (RNN-T), have reached close to human-like performance and are deployed in commercial applications. However, their core operations depart from those of their powerful biological counterpart, the human brain. Meanwhile, current developments in biologically inspired ASR models lag behind in terms of accuracy and focus primarily on small-scale applications. In this work, we revisit the incorporation of biologically plausible models into deep learning and enhance their capabilities by taking inspiration from the brain's diverse neural and synaptic dynamics. In particular, we propose novel deep learning units by introducing neural connectivity concepts emulating the axo-somatic and the axo-axonic synapses, and integrate them into the RNN-T architecture. We demonstrate for the first time that such a model can yield performance levels competitive with the state of the art. Moreover, our implementation has a significantly reduced computational cost and a lower latency.